Augmented calls to yaml.load to use the safe loader. #3825
This commit sets the PyTorch and TorchVision versions to the nightly release 2024-10-29.

This commit also fixes the CI failure that appeared after commit 54d9e24 was merged. The CI checks on that PR ran before the previous RollPyTorch update, but the PR was merged after the update, so the failure was not caught before merging.

While exporting the fx_graph through the fx_importer for the `rrelu` and `rrelu_with_noise` ops in train mode, the `aten.rrelu_with_noise` op is decomposed via the default PyTorch decomposition. That decomposition contains an input mutation (see https://github.com/pytorch/pytorch/blob/9bbe4a67ad137032add6a3b0b74bda66f5ef83d2/torch/_decomp/decompositions.py#L325), which results in a runtime failure. The issue should be fixed by pytorch/pytorch#138503; until then, the failing tests are added to the xfail set.

Also, after the RollPyTorch update, the following tests started passing for the fx_importer and fx_importer_stablehlo configs:
- "ElementwiseRreluTrainModule_basic"
- "ElementwiseRreluTrainStaticModule_basic"
- "ElementwiseRreluWithNoiseTrainModule_basic"
- "ElementwiseRreluWithNoiseTrainStaticModule_basic"

This commit also updates the dtype check for the `aten.linear` op, since the op now expects both input tensors to have the same dtype.

Signed-off-by: Vivek Khandelwal <[email protected]>
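For context, a hypothetical minimal reproducer for the decomposition issue described above (a sketch, not part of the patch; whether the trace shows `rrelu_with_noise` directly may vary by PyTorch version):

```python
import torch

# rrelu in train mode routes through aten.rrelu_with_noise, whose default
# decomposition mutates its noise argument in place (the failure described above).
class RreluTrain(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.rrelu(x, training=True)

prog = torch.export.export(RreluTrain(), (torch.randn(4, 4),))
print(prog.graph_module.code)  # inspect the traced graph before decomposition
```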
Removes a boolean variable that is used only for an assertion, and inlines the condition into the assertion. Signed-off-by: Max Dawkins <[email protected]>
- Bumps llvm-project to llvm/llvm-project@6c64c8a
- Bumps stablehlo to openxla/stablehlo@6e403b1
- Updates type conversion materialization functions to return Value after an API change in llvm-project.

Signed-off-by: Max Dawkins <[email protected]>
- Add/extend Torch to TOSA legalization for the following ops:
  + Add aten.threshold_backward
  + Fix aten.threshold
  + Re-implement aten.broadcast_to using tosa.reshape and tosa.tile
  + Add support for rank 0 index for aten.index_select
  + Fix aten.index_put.hacked_twin
  + Add aten.uniform
  + Add aten.logical_and
- Update xfail_sets.py with new e2e results
- Add LIT tests to basic.mlir for the newly added ops

Change-Id: I8910564a049d18293284fe2e55e82bc1d2cf10e3
Signed-off-by: Justin Ngo <[email protected]>
A non-persistent buffer is not part of its module's `state_dict`. Hence, when `experimental_support_mutation=True` is set and the module has a non-persistent buffer, the current fx importer fails to retrieve a value from `state_dict` and produces `torch.constant.none` to represent the buffer. This fix gets the value of a non-persistent buffer from the module's `constants` instead.

Co-authored-by: Dixin Zhou <[email protected]>
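For reference, a minimal sketch of the kind of module affected (names are illustrative):

```python
import torch

class WithBuffer(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # Non-persistent: excluded from state_dict(), but still a buffer.
        self.register_buffer("scale", torch.ones(1), persistent=False)

    def forward(self, x):
        return x * self.scale

m = WithBuffer()
assert "scale" not in m.state_dict()        # absent from state_dict
assert "scale" in dict(m.named_buffers())   # still tracked as a buffer
```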
# Tracking

[Issue](nod-ai/SHARK-ModelDev#848)
[TorchToLinalg Op Support](nod-ai/SHARK-ModelDev#347)

# Description

Aten_TrilinearOp is an implementation of a "trilinear einstein sum"; essentially, an einsum across 3 tensors. There are a few inputs:

## Tensor Inputs
- i1, i2, i3 - the three input tensors for the _trilinear op.

## Expands
These inputs allow you to unsqueeze an input tensor at the specified dims as a pre-processing step to make the shapes compatible for the rest of the op:
- expand1: List[int], expand2: List[int], expand3: List[int]

## sumdim
- sumdim: List[int] - After applying element-wise multiplication, the values in sumdim denote the dimensions to collapse by summing over them.

## unroll_dim
- unroll_dim: int - In the PyTorch implementation, this specifies a dimension where you could slice the input tensors, multiply and sum them, then concatenate the results in an output tensor. This complicates the implementation significantly but doesn't change the result, so I opted against using it. Along with that, a previously accepted path for solving this involved reusing AtenEinsumOp, which would also ignore this input.

# Solution

After trying a bunch of more complicated approaches, this op actually ended up being quite simple ([see _trilinear](https://dev-discuss.pytorch.org/t/defining-the-core-aten-opset/1464)):

`_trilinear = (i1.unsqueeze(expand1) * i2.unsqueeze(expand2) * i3.unsqueeze(expand3)).sum(sumdim)`

Wish I saw this earlier, but whatcha gonna do 🙃

## Not Reusing AtenEinsumOp

Frankly, I found multiple cases where valid inputs produced numerical mismatches for EinsumOp, even when running tests against EinsumOp directly. I think it has something to do with the singleton dimensions. I will need to look into this further, but once I realized the simplified approach, it appeared to be more reliable and much simpler.

Either way (credit to @zjgarvey), there are improvements to the einsum op here. When I was originally trying to use the op, intermediate tensors were being flattened properly, but their 0th dimension was being cast from a static dim to a dynamic dim due to integers not folding correctly in the MLIR. Figured it's worth keeping these improvements for future reusers of EinsumOp.

# The zeroed-out dim "bug"

For some reason, if you specify the same dimension in all `expands`, e.g.

```
[expand1=[0], expand2=[0], expand3=[0]]
[expand1=[1], expand2=[1], expand3=[1]]
```

the _trilinear op would produce `0` for that dimension in the output shape, unless it was also included in `sumdim`. This goes against the implementation of torch.einsum:

```
>>> a, b, c = [torch.rand(1, 3, 3, 3) for i in range(3)]
# Simulate expand at dim=0 for all input tensors
>>> torch.einsum('abcd,abcd,abcd->abcd', a, b, c).shape
torch.Size([1, 3, 3, 3])
```

and is just straight-up incorrect mathematically. I considered "replacing" singleton dims with zeroed-out dims, but that seemed like carrying over a bug. Instead, I included a test for the case, verified that the singleton dimensions were handled the way torch.einsum handles them (rather than torch._trilinear), and xfailed it with a note as to why.
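A quick numerical sanity check of the formula from the Solution section above (a sketch; the shapes and expand dims are chosen arbitrarily and avoid the zeroed-out-dim case):

```python
import torch

i1, i2, i3 = (torch.rand(3, 4) for _ in range(3))

# expand1=[1], expand2=[0], expand3=[0], sumdim=[2]
ref = torch._trilinear(i1, i2, i3, [1], [0], [0], [2])
out = (i1.unsqueeze(1) * i2.unsqueeze(0) * i3.unsqueeze(0)).sum(2)
torch.testing.assert_close(ref, out)  # both have shape (3, 3)
```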
Addition of bools saturates, which equates to an `or` operator. Updated to avoid some noticed downstream failures.
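A one-line illustration of the saturating behavior in eager PyTorch:

```python
import torch

a = torch.tensor([True, True, False])
b = torch.tensor([True, False, False])
# Bool addition saturates at True, so it matches logical OR:
assert torch.equal(a + b, a | b)
```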
Attention often broadcasts a mask across the batch dimension, as masking is usually performed the same way across attention heads. This change optionally materializes that broadcast along the mask dimensions.
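As an illustration (a sketch using PyTorch's SDPA; the shapes are arbitrary), a single mask is broadcast over both the batch and head dimensions:

```python
import torch
import torch.nn.functional as F

q = k = v = torch.randn(2, 8, 16, 64)              # (batch, heads, seq, head_dim)
mask = torch.ones(1, 1, 16, 16, dtype=torch.bool)  # broadcast over batch and heads
out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
print(out.shape)  # torch.Size([2, 8, 16, 64])
```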
…rs (#3822)

The existing TorchToTosa lowering logic for `torch.aten.avg_pool2d` doesn't handle some unsupported properties well, leading to a silent wrong answer (SWA) when going through `torch-backend-to-tosa-backend-pipeline`. For instance, the existing TOSA avg_pool2d specification cannot represent `count_include_pad` and `divisor_override`, so during TorchToTosa lowering we silently ignored these properties, which leads to SWA in some cases. The fix captured in this change errors out for unsupported scenarios.

For details on `count_include_pad` and `divisor_override`, see:
https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html

Co-authored-by: Hanumanth Hanumantharayappa <[email protected]>
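To see why silently dropping `count_include_pad` changes results, a small eager-mode sketch:

```python
import torch

x = torch.ones(1, 1, 4, 4)
inc = torch.nn.AvgPool2d(2, padding=1, count_include_pad=True)(x)
exc = torch.nn.AvgPool2d(2, padding=1, count_include_pad=False)(x)
# Corner windows overlap the zero padding; the padded zeros count toward
# the divisor only when count_include_pad=True.
print(inc[0, 0, 0, 0].item(), exc[0, 0, 0, 0].item())  # 0.25 vs. 1.0
```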
I think we should use template parameters. @yyp0 @qingyunqu
Added lit tests since these scalar operations don't trace well through the `fx_importer` route. `XOR` and `NE` are equivalent binary operators, so `aten.ne.bool` is lowered to `arith.xori`.
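The equivalence is easy to confirm with a truth table (plain Python sketch):

```python
# For booleans, a != b (NE) and a ^ b (XOR) agree on every input pair:
for a in (False, True):
    for b in (False, True):
        assert (a != b) == (a ^ b)
```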
The onnx ingest sometimes has poorly propagated shape information. E.g.:

```mlir
...
%9020 = torch.prims.squeeze %9010#1, %9019 : !torch.vtensor<[?,384,1],f32>, !torch.list<int> -> !torch.vtensor<[1,384],f32>
return %9015, %9020 : !torch.vtensor<[1,384],f32>, !torch.vtensor<[1,384],f32>
}
}
```

This occurred at the boundary of the onnx model `migraphx_bert__bert-large-uncased`. Evidently, the output value tensor info had more information than could be propagated forward. The `PrimsSqueeze` lowering was returning a `!torch.vtensor<[?,384],f32>`, which was causing a type mismatch with the `func.return`.
1. Adds case handling for `aten.slice.tensor` shape inference with negative strides. This is not technically allowed by native PyTorch (see the sketch after this list), but it is useful for ONNX ingest. We were getting some incorrect shapes for these negative-strided slice ops.
2. Adds scalarization support for ops seen in PyTorch pad exports to ONNX. These are typically `aten.view`, `aten.transpose.int`, and `aten.slice.Tensor` with negative strides (and rank 2).
3. Allows a view op's `self` to be added to the worklist conditionally, based on whether the view op actually occurs as a midpoint in a shape computation.
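A minimal sketch of the eager-mode restriction mentioned in item 1:

```python
import torch

x = torch.arange(6)
try:
    x[::-1]  # negative step: rejected by eager PyTorch
except ValueError as e:
    print(e)  # e.g. "step must be greater than zero" (message may vary by version)
```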
1. Clamps OOB start index to 0 in the slice folder.
2. Adds a more descriptive `emitError` in the slice folder if the creation of the `DenseElementsAttr` would fail due to a bad result shape.
3. Fixes the `onnx.Shape` lowering to default `end` to `inputRank` instead of `-1`. When `end == -1`, the last element was missing when slicing (see the sketch below).
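A plain-Python sketch of why `end=-1` was the wrong default in item 3 (illustrative, not the lowering code):

```python
shape = [2, 3, 4]
input_rank = len(shape)
assert shape[0:-1] == [2, 3]             # end=-1 drops the last element
assert shape[0:input_rank] == [2, 3, 4]  # end=inputRank keeps the full shape
```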
Add `tosa::getConstTensor` with an int8_t template, as used in #3827.
- Add Torch to TOSA legalization for the aten.as_strided op
- Update xfail_sets with the following:
  + New aten.as_strided results
  + Changes from commit 7f9f99c
  + Failed tests from the new PyTorch version update
- Add a new LIT test to basic.mlir

Change-Id: I6f471ea116ca47f2bf9537b62950fce75a2c624f
Signed-off-by: Justin Ngo <[email protected]>
…s gather/scatter (#3829)

In torch.index_put-like ops, `values` is only required to be broadcastable to `input[indices]`, rather than an exact dimension match. This patch fixes the problem by adding a stablehlo.dynamic_broadcast_in_dim before creating the stablehlo.scatter op. This patch also enhances the `getBroadcastResultShape` utility in the hlo namespace.
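The broadcast requirement in eager PyTorch, as a small sketch:

```python
import torch

x = torch.zeros(4, 3)
indices = (torch.tensor([0, 2]),)
# x[indices] has shape (2, 3); a scalar value broadcasts to it:
x.index_put_(indices, torch.tensor(1.0))
print(x[0], x[2])  # rows 0 and 2 are now all ones
```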
For the op `index_put_`, if accumulate == false, the behavior is undefined when the indices aren't unique (https://pytorch.org/docs/stable/generated/torch.Tensor.index_put_.html). So, when converting `AtenIndexPutHackedTwinOp` to a TMTensor scatter op, mark the indices as unique when `accumulate == false`. This should have no functional effect (unless users are relying on UB), and assuming unique indices has the benefit of unlocking better optimizations in further compiler stages.

Signed-off-by: Ian Wood <[email protected]>
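A sketch of the distinction in eager PyTorch:

```python
import torch

x = torch.zeros(3)
indices = (torch.tensor([1, 1]),)  # duplicate index
values = torch.tensor([2.0, 3.0])

# accumulate=True is well defined: duplicate writes are summed.
print(x.clone().index_put_(indices, values, accumulate=True))  # tensor([0., 5., 0.])
# accumulate=False with duplicate indices is undefined behavior: either
# write may win, so the compiler is free to assume indices are unique.
```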
Also fix `aten.exp2` with integer input types.
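For reference, `exp2` on an integer tensor promotes to floating point in eager PyTorch:

```python
import torch

print(torch.exp2(torch.tensor([0, 1, 2])))  # tensor([1., 2., 4.])
```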
Accidentally merged with other commits that aren't mine... I'll just start over.